On Langevin Updating in Multilayer Perceptrons

Author

  • Thorsteinn S. Rögnvaldsson
Abstract

The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, which often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect.
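
As a rough illustration of the two rules compared above (the function names and hyperparameter values are placeholders, not taken from the paper), one update step of each kind might look like this in Python:

    import numpy as np

    def langevin_update(w, grad, lr=0.05, noise_std=0.01, rng=None):
        # Gradient-descent step plus i.i.d. Gaussian noise on the weights;
        # the noise perturbs directions where the raw gradient is tiny.
        rng = np.random.default_rng() if rng is None else rng
        return w - lr * grad + noise_std * rng.standard_normal(w.shape)

    def manhattan_update(w, grad, step=1e-3):
        # Fixed-size step along the sign of the gradient; magnitudes are
        # discarded, equalizing step sizes across badly scaled directions.
        return w - step * np.sign(grad)

Both rules keep the weights moving in directions where an ill-conditioned Hessian makes the raw gradient vanishingly small, which is consistent with the similar effect the abstract reports for the two methods.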

Similar articles

Incremental Learning of Auto-Association Multilayer Perceptrons Network

This paper introduces a new algorithm to reduce the time needed to update the weights of an auto-association multilayer perceptron network. The basic idea is to modify the singular value decomposition used in the batch algorithm so that the weights are updated whenever a new row is added to the input matrix. The computational analysis and the experiments show that the new algorithm speeds up the im...

Error back-propagation algorithm for classification of imbalanced data

Classification of imbalanced data is pervasive, but it is a difficult problem to solve. To improve the classification of imbalanced data, this letter proposes a new error function for the error backpropagation algorithm of multilayer perceptrons. The error function intensifies weight-updating for the minority class and weakens weight-updating for the majority class. We verify the effect...
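
The letter's exact error function is not reproduced in the abstract; as a generic sketch of the stated idea (stronger updates for the minority class, weaker ones for the majority), a class-weighted error gradient might look like the following, where weighted_error_grad, w_min, and w_maj are hypothetical names and values:

    import numpy as np

    def weighted_error_grad(y_pred, y_true, is_minority, w_min=4.0, w_maj=1.0):
        # Per-example gradient of a class-weighted squared error: minority
        # examples get the larger weight, so their errors drive stronger
        # weight updates during backpropagation.
        weights = np.where(is_minority, w_min, w_maj)
        return weights * (y_pred - y_true)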

Support Vector Machine Based Facies Classification Using Seismic Attributes in an Oil Field of Iran

Seismic facies analysis (SFA) aims to classify similar seismic traces based on amplitude, phase, frequency, and other seismic attributes. SFA has proven useful in interpreting seismic data, allowing significant information on subsurface geological structures to be extracted. While facies analysis has been widely investigated through unsupervised-classification-based studies, there are few cases...

Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

In this paper we conduct a comparative study of hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to optimizing the architecture and initial weights of multilayer perceptrons; a method that searches for the parameters of the training algorithm; and an approach for cooperative co-evolut...

Natural Gradient Works Efficiently in Learning

When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation) and the space of linear dynamical systems (for blind source deconvoluti...
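
For reference, the natural gradient rescales the ordinary gradient by the inverse of the Riemannian metric tensor $G(\theta)$ (the Fisher information matrix in the perceptron case), giving the update

    $\theta_{t+1} = \theta_t - \eta \, G^{-1}(\theta_t) \, \nabla L(\theta_t)$,

which reduces to ordinary gradient descent when $G$ is the identity, i.e. when the parameter space is Euclidean.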

Journal:
  • Neural Computation

Volume 6, Issue –

Pages –

Publication date: 1994